42 research outputs found

    Discrete Anatomical Coordinates for Speech Production and Synthesis

    The sounds of all languages are described by a finite set of symbols, extracted from the continuum of sounds produced by the vocal organ. How discrete phonemic identity is encoded in the continuous movements that produce speech remains an open question in experimental phonology. In this work, the question is addressed using Hall-effect transducers and magnets, mounted on the tongue, lips, and jaw, to track the kinematics of the oral tract during the vocalization of vowel-consonant-vowel structures. Using a threshold strategy, the time traces of the transducers were converted into discrete motor coordinates unambiguously associated with the vocalized phonemes. Furthermore, the transducer signals, combined with the discretization strategy, were used to drive a low-dimensional vocal model capable of synthesizing intelligible speech. This work not only addresses a relevant question in the biology of language but also demonstrates the capacity of the experimental technique to monitor the displacement of the main articulators of the vocal tract during speech. The novel electronic device represents an economical and portable alternative to the standard systems used to study vocal tract movements.
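
    A minimal sketch of such a threshold-based discretization, in Python (the sensor names, signal shapes, and threshold values are illustrative assumptions, not the authors' actual parameters):

```python
import numpy as np

def discretize_traces(traces, thresholds):
    """Convert continuous transducer time traces into binary motor coordinates.

    traces: (n_sensors, n_samples) array, e.g. Hall-effect sensor voltages.
    thresholds: per-sensor threshold separating the two articulatory states.
    Returns an (n_sensors, n_samples) array of 0/1 discrete coordinates.
    """
    traces = np.asarray(traces, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)[:, None]
    return (traces > thresholds).astype(int)

# Toy example: a lip-closure gesture crossing its threshold mid-utterance.
t = np.linspace(0, 1, 500)
lips = 0.2 + 0.7 * np.exp(-((t - 0.5) / 0.05) ** 2)  # bell-shaped closure
jaw = 0.4 + 0.1 * np.sin(2 * np.pi * 3 * t)          # small jaw oscillation
coords = discretize_traces(np.vstack([lips, jaw]), thresholds=[0.5, 0.6])
```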

    Motor representations underlie the reading of unfamiliar letter combinations

    Silent reading is a cognitive operation that produces verbal content with no vocal output. One relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories, and lip dynamics during the reading of consonant-consonant-vowel (CCV) combinations that are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlates with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that these transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require greater laryngeal and/or articulatory effort to be pronounced. Our results support the view that a speech motor code is used for the recognition of infrequent text strings during silent reading.
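
    A hedged sketch of the reported fixation-transition correlation, assuming per-item duration arrays (the numbers below are invented placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-CCV measurements (ms): first-fixation durations during
# silent reading and consonant-transition durations during overt production.
first_fixation_ms = np.array([212.0, 245.0, 198.0, 260.0, 231.0])
transition_ms = np.array([48.0, 71.0, 39.0, 82.0, 60.0])

r, p = pearsonr(first_fixation_ms, transition_ms)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```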

    Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning

    We introduce a deceptively simple behavioral task that robustly identifies two qualitatively different groups within the general population. When presented with an isochronous train of random syllables, some listeners are compelled to align their own concurrent syllable production with the perceived rate, whereas others remain impervious to the external rhythm. Using both neurophysiological and structural imaging approaches, we show group differences with clear consequences for speech processing and language learning. When listening passively to speech, high synchronizers show increased brain-to-stimulus synchronization over frontal areas, and this localized pattern correlates with precise microstructural differences in the white matter pathways connecting frontal to auditory regions. Finally, the data expose a mechanism that underpins performance on an ecologically relevant word-learning task. We suggest that this task will help to better understand and characterize individual performance in speech processing and language learning.
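
    One standard way to quantify such speech-to-stimulus alignment is a phase-locking value (PLV) between the produced and perceived speech envelopes; the sketch below assumes this measure and a synthetic syllable rate, and may differ from the study's exact pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(produced_env, stimulus_env):
    """PLV between two amplitude envelopes: 0 = no locking, 1 = perfect."""
    phase_prod = np.angle(hilbert(produced_env - produced_env.mean()))
    phase_stim = np.angle(hilbert(stimulus_env - stimulus_env.mean()))
    return np.abs(np.mean(np.exp(1j * (phase_prod - phase_stim))))

# Toy check: a production locked to a 4.5 Hz syllable train with a constant
# lag yields a PLV close to 1; an unrelated rhythm yields a low PLV.
fs = 100
t = np.arange(0, 10, 1 / fs)
stim = 1 + np.cos(2 * np.pi * 4.5 * t)
prod = 1 + np.cos(2 * np.pi * 4.5 * t + 0.3)  # synchronized, fixed lag
print(phase_locking_value(prod, stim))         # ~1.0
```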

    Repeat the syllable “ta” and we will tell you how your brain works

    An innate attribute of human beings is the ability to synchronize our movements with the sounds we perceive. Think, for example, of tapping a foot or nodding the head to the rhythm of a song. This phenomenon occurs without effort or prior training: even babies do it!

    Neural oscillations are a start toward understanding brain activity rather than the end.

    Does rhythmic neural activity merely echo the rhythmic features of the environment, or does it reflect a fundamental computational mechanism of the brain? This debate has generated a series of clever experimental studies attempting to find an answer. Here, we argue that the field has been obstructed by predictions about oscillators that are grounded more in intuition than in biophysical models compatible with the observed phenomena. What follows is a series of cautionary examples that serve as reminders to ground our hypotheses in well-developed theories of oscillatory behavior put forth by the theoretical study of dynamical systems. Ultimately, our hope is that this exercise will push the field to concern itself less with the vague question of "oscillation or not" and more with specific biophysical models that can be readily tested.
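
    As an example of the grounding the authors call for, here is a minimal sketch of the Adler equation, a canonical model of forced oscillation whose entrainment region (the Arnold tongue) yields concrete, testable predictions; the parameter values are illustrative:

```python
import numpy as np

def simulate_adler(f0, f_stim, K, T=30.0, dt=1e-3):
    """Adler equation for the phase difference psi between an intrinsic
    oscillator at f0 and a rhythmic stimulus at f_stim:
        dpsi/dt = 2*pi*(f0 - f_stim) - K*sin(psi)
    Phase locking occurs only when |2*pi*(f0 - f_stim)| <= K.
    """
    psi = np.zeros(int(T / dt))
    detuning = 2 * np.pi * (f0 - f_stim)
    for i in range(1, len(psi)):
        psi[i] = psi[i - 1] + dt * (detuning - K * np.sin(psi[i - 1]))
    return psi

locked = simulate_adler(f0=4.6, f_stim=4.5, K=2.0)    # inside locking range
drifting = simulate_adler(f0=6.0, f_stim=4.5, K=2.0)  # outside: phase drifts
print(np.ptp(locked[-5000:]), np.ptp(drifting[-5000:]))  # ~0 vs. large
```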

    Adaptive oscillators provide a hard-coded Bayesian mechanism for rhythmic inference

    Posted July 08, 2022 on bioRxiv. Bayesian theories of perception suggest that the human brain internalizes a model of environmental patterns to reduce sensory noise and improve stimulus processing. The internalization of external regularities is particularly manifest in the time domain: humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inference are debated: it remains unclear whether predictive perception relies on high-level generative models or whether it can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input. Here, we propose that these seemingly antagonistic accounts can be conceptually reconciled. In this view, neural oscillators may constitute hard-coded physiological priors, in a Bayesian sense, that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. To test this, we asked human participants to track pseudo-rhythmic tone sequences and assess whether the final tone was early or late. Using a Bayesian model, we account for various aspects of participants' performance and demonstrate that the classical distinction between absolute and relative mechanisms can be unified under this framework. Next, using a dynamical systems perspective, we successfully model this behavior using an adaptive frequency oscillator which adjusts its spontaneous frequency based on the rate of the stimuli. This model reflects human behavior better than a canonical nonlinear oscillator and a predictive ramping model, both widely used for temporal estimation and prediction. Our findings suggest that an oscillator may serve as a heuristic for a rhythmic prior in the Bayesian sense. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve Bayesian rhythmic inference, thereby reconciling numerous empirical observations and a priori incompatible frameworks for temporal inferential processes.
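
    A minimal sketch of an adaptive frequency oscillator of the kind described, using the Hebbian frequency-adaptation rule of Righetti et al. (whether this matches the authors' exact implementation is an assumption; parameters are illustrative):

```python
import numpy as np

def adaptive_oscillator(stim, fs, f_init, K=5.0):
    """Adaptive-frequency phase oscillator:
        phi'   = omega - K * F(t) * sin(phi)
        omega' =       - K * F(t) * sin(phi)
    The intrinsic frequency omega drifts until it matches the dominant
    stimulus rate, acting as a slowly updated prior on the rhythm's tempo.
    """
    dt = 1.0 / fs
    phi, omega = 0.0, 2 * np.pi * f_init
    rates = np.empty(len(stim))
    for i, F in enumerate(stim):
        coupling = K * F * np.sin(phi)
        phi += dt * (omega - coupling)
        omega -= dt * coupling
        rates[i] = omega / (2 * np.pi)
    return rates  # instantaneous preferred rate in Hz

fs = 200
t = np.arange(0, 60, 1 / fs)
stim = np.cos(2 * np.pi * 2.5 * t)               # rhythmic input at 2.5 Hz
rates = adaptive_oscillator(stim, fs, f_init=2.0)
print(rates[-1])  # drifts from 2.0 toward ~2.5 (small residual wobble)
```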

    Auditory-motor synchronization varies among individuals and is critically shaped by acoustic features

    The ability to synchronize body movements with quasi-regular auditory stimuli is a fundamental human trait at the core of speech and music. Despite the long history of the study of this ability, little attention has been paid to how the acoustic features of the stimuli and individual differences can modulate auditory-motor synchrony. Here, by exploring auditory-motor synchronization abilities across different effectors and types of stimuli, we reveal that this capability is more restricted than previously assumed. While the general population can synchronize to sequences composed of repetitions of the same acoustic unit, synchrony in a subgroup of participants is impaired when the unit's identity varies across the sequence. In addition, synchronization in this group can be temporarily restored by priming with a facilitator stimulus. Auditory-motor integration is stable across effectors, supporting the hypothesis of a central clock mechanism subserving the different articulators but critically shaped by the acoustic features of the stimulus and individual abilities.

    Decoding imagined speech reveals speech planning and production mechanisms

    Speech imagery (the ability to internally generate quasi-perceptual experiences of speech) is a fundamental ability linked to cognitive functions such as inner speech, phonological working memory, and predictive processing. Speech imagery is also considered an ideal tool for testing theories of overt speech. The study of speech imagery is challenging, primarily because of the absence of overt behavioral output and the difficulty of temporally aligning imagery events across trials and individuals. We used magnetoencephalography (MEG) paired with temporal-generalization-based neural decoding and a simple behavioral protocol to determine the processing stages underlying speech imagery. We monitored participants' lip and jaw micromovements during mental imagery of syllable production using electromyography. Decoding participants' imagined syllables revealed a sequence of task-elicited representations. Importantly, participants' micromovements did not discriminate between syllables. The decoded sequence of neuronal patterns maps well onto the predictions of current computational models of overt speech motor control and provides evidence for the hypothesized internal and external feedback loops for speech planning and production, respectively. Additionally, the results expose the compressed nature of representations during planning, which contrasts with the natural rate at which internal productions unfold. We conjecture that the same sequence underlies the motor-based generation of sensory predictions that modulate speech perception, as well as the hypothesized articulatory loop of phonological working memory. The results underscore the potential of speech imagery research based on new experimental approaches and analytical methods, and pave the way for successful non-invasive brain-computer interfaces.
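
    A sketch of temporal-generalization decoding (King & Dehaene style) on synthetic data, using logistic regression; the paper's actual MEG pipeline may differ, and the data below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

def temporal_generalization(X, y, n_splits=5):
    """X: (n_trials, n_channels, n_times) epochs; y: syllable labels.

    Returns an (n_times, n_times) matrix of cross-validated accuracies where
    entry [i, j] = train the classifier at time i, test it at time j.
    Off-diagonal spread indicates a sustained or reactivated neural pattern.
    """
    n_trials, _, n_times = X.shape
    scores = np.zeros((n_times, n_times))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X[:, :, 0], y):
        for i in range(n_times):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[train_idx, :, i], y[train_idx])
            for j in range(n_times):
                scores[i, j] += clf.score(X[test_idx, :, j], y[test_idx])
    return scores / n_splits

# Toy data: 40 trials, 20 "sensors", 15 time points, binary syllable label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 40)
X = rng.normal(size=(40, 20, 15))
X[:, 0, 5:10] += y[:, None] * 2.0  # signal present only from t=5 to t=9
print(temporal_generalization(X, y).round(2))
```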

    Adaptive oscillators support Bayesian prediction in temporal processing.

    Humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inference are debated: it remains unclear whether predictive perception relies on high-level generative models or whether it can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input, and different underlying computational mechanisms have been proposed. Here we explore human perception of tone sequences with some temporal regularity at varying rates, but with considerable variability. Using a dynamical systems perspective, we successfully model the participants' behavior with an adaptive frequency oscillator which adjusts its spontaneous frequency based on the rate of the stimuli. This model reflects human behavior better than a canonical nonlinear oscillator and a predictive ramping model, both widely used for temporal estimation and prediction, and demonstrates that the classical distinction between absolute and relative computational mechanisms can be unified under this framework. In addition, we show that neural oscillators may constitute hard-coded physiological priors, in a Bayesian sense, that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve rhythmic inference, reconciling previously incompatible frameworks for temporal inferential processes.
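
    The claim that an adapted oscillator acts as a Bayesian prior can be illustrated with a standard conjugate Gaussian update over the next inter-onset interval (a hedged sketch; the numbers are illustrative, not the study's fitted parameters):

```python
import numpy as np

def posterior_interval(observed, prior_mu, prior_sd, sens_sd):
    """Conjugate Gaussian update for an inter-onset interval estimate.

    The perceived interval is pulled toward the prior mean, more strongly
    when sensory noise (sens_sd) is large relative to prior uncertainty:
    the classic central-tendency signature of Bayesian timing.
    """
    w = prior_sd**2 / (prior_sd**2 + sens_sd**2)  # weight on the observation
    return w * observed + (1 - w) * prior_mu

# An oscillator whose adapted frequency encodes the prior mean plays the
# role of prior_mu here; e.g. a 2 Hz rhythm implies a 500 ms prior interval.
print(posterior_interval(observed=560.0, prior_mu=500.0,
                         prior_sd=40.0, sens_sd=60.0))  # ~518.5 ms
```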

    Magnetoencephalography and Language
